Amazon says its next-gen chips are four times faster for AI training

It’s also expanding its relationship with NVIDIA and launching a business-oriented chatbot.

Amazon Web Services (AWS) just kicked off its Las Vegas-based re:Invent conference with a stream of announcements, most of which involve the year’s most popular technology, AI. These news items, taken as a whole, give us a sneak peek at the company’s long-term goals for artificial intelligence platforms.

First up, AWS unveiled its latest generation of AI chips, one intended for training models and one for running them. Trainium2, which, as the name suggests, handles model training, is designed to deliver up to four times the training performance and twice the energy efficiency of its predecessor. Amazon says the chips will let developers train models faster and at a lower cost, thanks to that reduction in energy use. Anthropic, the Amazon-backed OpenAI competitor, has already announced plans to build models with Trainium2 chips.
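For developers, Trainium capacity is provisioned through EC2 like any other instance type. Here's a minimal boto3 sketch of what that request might look like — note that the "trn2.48xlarge" name and the AMI ID are placeholders, since AWS hadn't published Trn2 instance details at announcement (the current generation ships as trn1 instances, e.g. "trn1.32xlarge"):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a single Trainium-backed training instance.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: a Neuron-compatible deep learning AMI
    InstanceType="trn2.48xlarge",     # assumed Trainium2 type name; trn1.32xlarge is the current generation
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```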

Graviton4, on the other hand, is built for general use. These processors are based on Arm architecture but consume less energy than comparable Intel or AMD chips. Amazon promises a 30 percent increase in general performance when running a trained AI model on a Graviton4 processor. This should lower cloud-computing costs for organizations that regularly run AI models, and offer a slight uptick in speed for regular users just looking to make some fake photos of Harry Potter at a rave or whatever.

All told, Graviton4 should allow AWS customers to “process larger amounts of data, scale their workloads, improve time-to-results and lower their total cost of ownership.” It’s available today in preview with a wider release planned for the coming months.
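Graviton instances show up in EC2 alongside everything else, so you can check which Arm-based families your account can reach. Here's a quick boto3 sketch — bear in mind the Graviton4-based R8g family AWS is previewing may not appear unless your account is enrolled:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Page through every instance type and keep the Arm-based (Graviton) families.
arm_families = set()
for page in ec2.get_paginator("describe_instance_types").paginate():
    for itype in page["InstanceTypes"]:
        if "arm64" in itype["ProcessorInfo"]["SupportedArchitectures"]:
            arm_families.add(itype["InstanceType"].split(".")[0])

print(sorted(arm_families))
```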

Typically, when a company announces new in-house chips, that spells trouble for third-party suppliers like NVIDIA, which is a huge player in the enterprise AI space thanks to companies using its GPUs for training and its Arm-based datacenter CPU, Grace. Instead of eschewing the partnership in favor of proprietary chips, Amazon is cementing the relationship further by offering enterprise customers cloud access to NVIDIA's latest H200 AI GPUs. It'll also operate more than 16,000 NVIDIA GH200 Grace Hopper Superchips expressly for NVIDIA's research and development team. This mirrors the approach of Amazon's chief AI rival, Microsoft, which announced an expanded NVIDIA partnership at the same time it revealed its own AI chip, Maia 100.
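In practice, "cloud access" means these GPUs surface as EC2 instance types whose hardware you can inspect through the API. AWS hadn't named the H200-backed family at announcement, so the sketch below queries the existing H100-based p5 family instead; swap in the new type name once it's public:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Inspect the GPUs attached to an instance type. p5.48xlarge is the
# current NVIDIA H100-based family; the H200-backed type wasn't named yet.
info = ec2.describe_instance_types(InstanceTypes=["p5.48xlarge"])
for gpu in info["InstanceTypes"][0]["GpuInfo"]["Gpus"]:
    print(gpu["Manufacturer"], gpu["Name"], "x", gpu["Count"])
```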

Amazon also announced a new business-focused AI chatbot called Q, a name that was likely inspired by the Star Trek demigod and not the Trump-adjacent conspiracy peddler. It's described as a "new type of generative AI-powered personal assistant" designed specifically to streamline work projects and customer service tasks. It can be tailored to suit any business and offers relevant answers to commonly asked questions. Amazon Q can also generate content on its own and take actions based on customer requests. It'll even customize interactions based on a user's role within a company.
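AWS hadn't published full developer documentation at preview time, but a Q chat call would plausibly look something like the boto3 sketch below. Treat all of it as an assumption: the "qbusiness" client name, the chat_sync operation and the field names are guesses modeled on the announcement, and the application ID is a placeholder:

```python
import boto3

# Assumed preview API: the client name, operation and fields are guesses.
q = boto3.client("qbusiness", region_name="us-east-1")

reply = q.chat_sync(
    applicationId="q-app-placeholder-id",  # a Q application configured for your org
    userId="employee@example.com",         # used to tailor answers to the user's role
    userMessage="Summarize this week's open Zendesk tickets tagged 'billing'.",
)
print(reply["systemMessage"])
```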

Q will also live in communication apps like Slack and in the text-editing applications commonly used by software developers. To that end, it can actually modify source code, and it connects to more than 40 enterprise systems, including Microsoft 365, Dropbox, Salesforce and Zendesk. Amazon Q is currently available in preview, with a wider release coming soon. It'll cost anywhere from $20 to $30 per user per month, depending on the features.

So what have we learned? Amazon is betting big on AI, like everyone else. More specifically, it's battling its old cloud rival Microsoft to be the go-to company for enterprise AI. And it's using AI to shore up its dominance in cloud computing, hoping to blunt any market-share gains by Microsoft and other players like Google and Alibaba.